

Section: New Results

Medical Imaging

Participants : René Anxionnat, Marie-Odile Berger, Nazim Haouchine, Erwan Kerrien, Matthieu Loosvelt, Pierre-Frédéric Villard, Brigitte Wrobel-Dautcourt, Ahmed Yureidini.

  • Interventional neuroradiology

    Minimally invasive techniques impact surgery in such a way that, in particular, an imaging modality is required to maintain visual feedback. Live X-ray imaging, called fluoroscopy, is used in interventional neuroradiology. Such images are very noisy and show only the vasculature, not the surrounding brain tissue. Above all, despite recent progress in detector technology, X-rays are harmful to the patient's health, and X-ray images are 2D projections deprived of any depth cue such as occlusion or shading. To quote a fellow physician: “it is rather uncanny to use 2D images to perform a gesture that is, by nature, 3D”. Two of our long-term aims in interventional neuroradiology are to reduce the operation time and to provide interventional radiologists with real-time 3D visual feedback.

    All our research in this field is conducted in collaboration with the Department of Interventional Neuroradiology at Nancy University Hospital. This year was a pivotal one for this activity, with some projects ending and new ones starting.

    We have been collaborating with the Shacra Inria project-team (Lille-Nord Europe) within the SOFA-InterMedS Inria Large-Scale Initiative for four years. Ahmed Yureidini is about to defend his PhD thesis; the last step of his work consisted in validating the model he devised for the blood vasculature, a tree of local implicit surfaces [8]. Simulations using triangular meshes were compared with our implicit model: computing time was reduced by two orders of magnitude, and the numerical instabilities encountered with meshes (jagged motions, unrealistic sticking of the catheter tip to the vessel surface, ...) were not observed with the implicit model. Publication of these results is under way.
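    To make the vessel-tree idea concrete, the minimal Python sketch below mimics how a catheter node can be tested against a tree of local implicit surfaces: the nearest patch is found by walking the tree, its implicit function is evaluated, and the node is pushed back inside the lumen if needed. The spherical patches, the ImplicitPatch class and the projection rule are hypothetical simplifications for illustration; the actual model in [8] fits local implicit surfaces to the segmented vessel data.

```python
import numpy as np

class ImplicitPatch:
    """One node of the vessel tree: a local implicit surface f(p) = 0.

    The patch here is a hypothetical stand-in (a sphere with a given center
    and radius); f(p) < 0 is taken as 'inside the lumen'.
    """
    def __init__(self, center, radius, children=()):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)
        self.children = list(children)

    def value(self, p):
        return np.linalg.norm(p - self.center) - self.radius

    def gradient(self, p):
        d = p - self.center
        n = np.linalg.norm(d)
        return d / n if n > 1e-9 else np.array([1.0, 0.0, 0.0])

def closest_patch(root, p):
    """Walk the tree and return the patch whose center is nearest to p."""
    best, best_d = root, np.linalg.norm(p - root.center)
    stack = list(root.children)
    while stack:
        node = stack.pop()
        d = np.linalg.norm(p - node.center)
        if d < best_d:
            best, best_d = node, d
        stack.extend(node.children)
    return best

def collision_response(root, p):
    """Project a catheter node back inside the vessel wall if it escapes.

    Evaluating a single local implicit function is what makes this query
    cheap compared with testing a catheter node against a triangle mesh.
    """
    patch = closest_patch(root, p)
    f = patch.value(p)
    if f > 0.0:                      # outside the lumen: push back along -grad f
        return p - f * patch.gradient(p)
    return p

# Toy vessel: a parent segment with one bifurcation.
root = ImplicitPatch([0, 0, 0], 2.0,
                     children=[ImplicitPatch([3, 0, 0], 1.5),
                               ImplicitPatch([0, 3, 0], 1.0)])
print(collision_response(root, np.array([0.0, 0.0, 2.5])))   # projected to (0, 0, 2)
```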

    We also collaborate with the Shacra team within the ANR IDeaS project. Computer simulations are very sensitive to inaccuracies in the various mechanical parameters or geometric boundary conditions, and such inaccuracies are ubiquitous when dealing with patient-based data. We aim at developing image-driven simulation, where live X-ray images are added as new constraints so that the virtual visualization of the simulated surgical tools fits their position observed in the actual images. This year, a sensor was designed and tested to capture the motion of line-shaped micro-tools (catheters, guidewires, etc.), and progress was made on designing Kalman-like filters compliant with the SOFA simulation platform.
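    As a rough illustration of the filtering idea, the sketch below performs a single Kalman correction step that pulls a simulated catheter-tip position toward its 2D detection in a fluoroscopic image. The orthographic projection matrix, the noise values and the kalman_update helper are assumptions made for this example; the filters actually developed for the SOFA platform are more involved.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One Kalman correction step: fuse a predicted 3D tip position with a
    2D image measurement z (pixel coordinates of the tip in fluoroscopy).

    x_pred : (3,) predicted state from the simulation
    P_pred : (3,3) its covariance
    H      : (2,3) linearized projection (image = H @ state, simplified here)
    R      : (2,2) image measurement noise
    """
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)        # corrected state
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P

# Toy example: the simulation predicts the tip at (10, 5, 80) mm with some
# uncertainty; the detector observes it at pixel-like coordinates (10.4, 4.7).
x_pred = np.array([10.0, 5.0, 80.0])
P_pred = np.diag([4.0, 4.0, 25.0])           # depth is the least constrained
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])              # orthographic stand-in for the C-arm projection
R = 0.25 * np.eye(2)
x, P = kalman_update(x_pred, P_pred, np.array([10.4, 4.7]), H, R)
print(x)   # in-plane coordinates move toward the image, depth stays simulation-driven
```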

    Our long-term collaboration with GE Healthcare took a new step this year with the arrival of Charlotte Delmas as a PhD student. She will work on devising algorithms to reconstruct the micro-tools in 3D from fluoroscopic images.

  • Designing respiration models for patient-based simulators

    Respiration models are useful in many ways. They can be used in: 1) pulmonary radiotherapy, where the tumor displacement must be accurately known so that it can be targeted by ionizing radiation, 2) thoracic surgery simulators, where breathing motion increases the realism of virtual patients, 3) interventional radiology, where augmented medical imaging that incorporates breathing motion can be used during treatment.

    However, building and parameterizing a fast and accurate respiration model is still an open problem. This year we continued working on evolutionary methods to estimate the parameters of a complex 15-D respiration model on 5 patients [23]. A compound fitness function has been designed to combine the various quantities that have to be minimized.
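    The sketch below illustrates the flavour of such an evolutionary estimation: a simple (mu, lambda) evolution strategy searches a 15-dimensional parameter vector while a compound fitness sums several weighted error terms. The quadratic error surrogates, the weights and the strategy settings are all hypothetical placeholders; the real objective combines quantities measured on the patient data in [23].

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_fitness(params, weights=(1.0, 1.0)):
    """Hypothetical compound fitness: a weighted sum of error terms of the
    kind the real objective combines (e.g., landmark distance and lung-volume
    error). Both terms here are synthetic quadratic surrogates."""
    landmark_err = np.sum((params - 0.3) ** 2)               # stand-in for mesh/landmark error
    volume_err = (np.sum(params) - 0.3 * len(params)) ** 2   # stand-in for volume mismatch
    return weights[0] * landmark_err + weights[1] * volume_err

def evolution_strategy(dim=15, pop=40, parents=10, sigma=0.5, generations=100):
    """Minimal (mu, lambda) evolution strategy over the model's parameter vector."""
    mean = rng.uniform(-1.0, 1.0, dim)
    for _ in range(generations):
        offspring = mean + sigma * rng.standard_normal((pop, dim))
        scores = np.array([compound_fitness(x) for x in offspring])
        elite = offspring[np.argsort(scores)[:parents]]
        mean = elite.mean(axis=0)         # recombination of the best candidates
        sigma *= 0.97                     # simple step-size decay
    return mean, compound_fitness(mean)

best, score = evolution_strategy()
print(score)    # should be close to zero for this toy objective
```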

    The optimized parameters have been applied to an interventional radiology simulator that takes respiration into account [14]. The simulator also includes segmentation, physically-based modeling, haptic rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between several universities (Liverpool, Manchester, Imperial College, Bangor, Leeds, Hull) involving computer scientists, clinicians, clinical engineers and occupational psychologists.
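    As an indication of what pseudo-ultrasound generation can involve, the sketch below resamples an oblique plane of a 3D intensity volume along a virtual probe and multiplies it by speckle noise. The nearest-neighbour sampling, the gamma speckle model and the toy volume are assumptions for illustration only, not the simulator's actual rendering pipeline.

```python
import numpy as np

def pseudo_ultrasound_slice(volume, origin, u_axis, v_axis, size=(128, 128), spacing=1.0):
    """Sample an oblique plane of a 3D intensity volume (a crude stand-in for
    pseudo-ultrasound generation): nearest-neighbour resampling along the
    virtual probe plane plus multiplicative speckle noise.

    volume         : 3D numpy array (e.g., a CT-derived intensity volume)
    origin         : voxel position of the plane's corner
    u_axis, v_axis : orthonormal in-plane directions of the virtual probe
    """
    rng = np.random.default_rng(0)
    h, w = size
    img = np.zeros(size)
    for i in range(h):
        for j in range(w):
            p = origin + spacing * (i * u_axis + j * v_axis)
            idx = np.round(p).astype(int)
            if np.all(idx >= 0) and np.all(idx < volume.shape):
                img[i, j] = volume[tuple(idx)]
    speckle = rng.gamma(shape=4.0, scale=0.25, size=size)   # simple multiplicative speckle
    return img * speckle

# Toy volume with a bright spherical 'organ'.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 15 ** 2).astype(float)
img = pseudo_ultrasound_slice(vol, origin=np.array([32.0, 0.0, 0.0]),
                              u_axis=np.array([0.0, 1.0, 0.0]),
                              v_axis=np.array([0.0, 0.0, 1.0]),
                              size=(64, 64))
print(img.shape)
```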

  • Realistic simulation of organ dissection

    Whilst laparoscopic surgical simulators are becoming increasingly realistic, they cannot yet fully replicate the experience of live surgery. In particular, tissue dissection is a task that is especially challenging to replicate. Limitations of current attempts to simulate tissue dissection include poor visual rendering, oversimplification of the task, and unrealistic tissue properties. In an effort to generate a more realistic model of tissue dissection in laparoscopic surgery, we worked on a novel method based on task analysis. Initially we have chosen to model only the basic geometry of this task rather than a whole laparoscopic procedure. This year, preliminary work led to the development of a real-time simulator performing organ dissection with a haptic thread running at 1000 Hz, in which 2D soft-tissue models replicate the process of tissue cutting.
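    A minimal sketch of the kind of 2D soft-tissue model involved is given below, assuming a simple mass-spring sheet: cutting is emulated by deleting the springs crossed by the instrument path, and the integration step is chosen at 1 ms to match a 1000 Hz haptic-rate loop. The grid layout, stiffness values and SoftTissue2D class are illustrative choices, not the simulator's actual implementation.

```python
import numpy as np

class SoftTissue2D:
    """Minimal 2D mass-spring sheet with spring removal to mimic dissection."""
    def __init__(self, nx=10, ny=10, spacing=0.01, k=200.0, mass=0.01, damping=0.5):
        xs, ys = np.meshgrid(np.arange(nx) * spacing, np.arange(ny) * spacing)
        self.pos = np.stack([xs.ravel(), ys.ravel()], axis=1)
        self.vel = np.zeros_like(self.pos)
        self.k, self.mass, self.damping = k, mass, damping
        idx = lambda i, j: i * nx + j
        self.springs = []
        for i in range(ny):
            for j in range(nx):
                if j + 1 < nx: self.springs.append((idx(i, j), idx(i, j + 1)))
                if i + 1 < ny: self.springs.append((idx(i, j), idx(i + 1, j)))
        self.rest = {s: np.linalg.norm(self.pos[s[0]] - self.pos[s[1]]) for s in self.springs}

    def cut(self, a, b):
        """Remove every spring whose segment intersects the blade segment a-b."""
        def crosses(p1, p2):
            d1, d2 = p2 - p1, b - a
            denom = d1[0] * d2[1] - d1[1] * d2[0]
            if abs(denom) < 1e-12:
                return False
            t = ((a - p1)[0] * d2[1] - (a - p1)[1] * d2[0]) / denom
            u = ((a - p1)[0] * d1[1] - (a - p1)[1] * d1[0]) / denom
            return 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0
        self.springs = [s for s in self.springs
                        if not crosses(self.pos[s[0]], self.pos[s[1]])]

    def step(self, dt=1e-3):
        """One explicit integration step; at dt = 1 ms this matches a 1000 Hz loop."""
        forces = np.zeros_like(self.pos)
        for (i, j) in self.springs:
            d = self.pos[j] - self.pos[i]
            length = np.linalg.norm(d) + 1e-12
            f = self.k * (length - self.rest[(i, j)]) * d / length
            forces[i] += f
            forces[j] -= f
        forces -= self.damping * self.vel
        self.vel += dt * forces / self.mass
        self.pos += dt * self.vel

tissue = SoftTissue2D()
tissue.cut(np.array([0.045, -0.01]), np.array([0.045, 0.05]))  # vertical blade stroke
for _ in range(1000):   # one simulated second of the 1000 Hz haptic-rate loop
    tissue.step()
print(len(tissue.springs))   # a few springs along the blade path have been removed
```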

  • Physics-based augmented reality

    The development of AR systems for use in the medical field faces one major challenge: the correct superposition of pre-operative data onto intra-operative images. This task is especially difficult when laparoscopic surgery is considered, since the superposition must be achieved on deformable organs. Most existing AR systems only consider a rigid registration between the pre- and intra-operative data, and the transformation is often computed interactively or from markers attached to the patient's body.

    In cooperation with the Shacra team, we have proposed in [17], [18] a framework for real-time augmentation of the vascular network and tumors during minimally invasive liver surgery. Internal structures computed from pre-operative CT scans can be overlaid onto the laparoscopic view for surgery guidance. Compared to state-of-the-art methods, our method uses a real-time biomechanical model to compute a volumetric displacement field from partial three-dimensional liver surface motion.

    The main contributions of this work are threefold: a) the use of a biomechanical model of liver deformation allows us to account for the heterogeneity and anisotropy due to veins and arteries; in addition, the physical model serves as a regularizer for the unreliable measurements of the visual tracking and as motion compensation in poorly textured areas; b) a real-time implementation of this virtual liver model has been proposed; c) appropriate boundary conditions and external forces have been defined which guide the biomechanical model using the partial 3D motion estimated at the liver surface from a stereo video stream.

    Thanks to this framework, we are able to estimate, in real time, the positions of relevant internal structures of the liver (vessels and tumors), taking into account liver deformations and tissue heterogeneity.
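    To convey the guiding principle, the sketch below replaces the biomechanical model with a coarse spring network: tracked surface points act as stiff external constraints on their closest surface nodes, and relaxing the network propagates this partial surface motion to internal nodes standing in for vessels and tumors. The cube geometry, stiffness values and gradient-descent relaxation are assumptions made for illustration; the actual framework in [17], [18] uses a real-time biomechanical liver model with proper boundary conditions.

```python
import numpy as np

def augment_internal_structures(nodes, edges, surface_ids, targets,
                                k_tissue=1.0, k_track=10.0, iters=500, step=0.05):
    """Hedged sketch of the guiding idea: a coarse spring network stands in
    for the liver, tracked surface points pull their surface nodes through
    stiff 'tracking' springs, and relaxing the network propagates the partial
    surface motion to internal nodes (vessels, tumors).

    nodes       : (n, 3) rest positions
    edges       : list of (i, j) connectivity of the coarse volume model
    surface_ids : indices of nodes observed by the stereo tracking
    targets     : tracked positions of those nodes
    """
    rest = {(i, j): np.linalg.norm(nodes[i] - nodes[j]) for i, j in edges}
    x = nodes.copy()
    for _ in range(iters):
        grad = np.zeros_like(x)
        for (i, j) in edges:                       # internal elastic energy
            d = x[j] - x[i]
            L = np.linalg.norm(d) + 1e-12
            g = k_tissue * (L - rest[(i, j)]) * d / L
            grad[i] -= g
            grad[j] += g
        for sid, t in zip(surface_ids, targets):   # external image-based constraints
            grad[sid] += k_track * (x[sid] - t)
        x -= step * grad                           # gradient descent to equilibrium
    return x

# Toy 'liver': a unit cube of 8 nodes; the 4 front nodes are tracked and shifted.
nodes = np.array([[i, j, k] for i in (0., 1.) for j in (0., 1.) for k in (0., 1.)])
edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if np.isclose(np.linalg.norm(nodes[a] - nodes[b]), 1.0)]
surface_ids = [i for i, p in enumerate(nodes) if p[2] == 1.0]
targets = nodes[surface_ids] + np.array([0.2, 0.0, 0.1])
deformed = augment_internal_structures(nodes, edges, surface_ids, targets)
print(deformed)   # untracked nodes move too, approximating the volumetric displacement field
```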